In second-order uncertain Bayesian networks, the conditional probabilities are known only in distribution, i.e., as probabilities over probabilities. The delta method has been applied to extend exact first-order inference methods to propagate both means and variances through sum-product networks derived from Bayesian networks, thereby characterizing epistemic uncertainty, or the uncertainty in the model itself. Separately, second-order belief propagation has been demonstrated for polytrees, but not for general directed acyclic graph structures. In this work, we extend loopy belief propagation to the setting of second-order Bayesian networks, giving rise to second-order loopy belief propagation (SOLBP). For second-order Bayesian networks, SOLBP generates inferences consistent with those produced by sum-product networks, while being more computationally efficient and scalable.
translated by Google Translate
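The loopy belief propagation underlying SOLBP can be illustrated with an ordinary first-order sum-product pass on a small cyclic graph. This is a minimal sketch only: SOLBP additionally propagates a variance alongside each message to capture second-order uncertainty, which is not shown here, and the toy graph, potentials, and function names are hypothetical.

```python
import numpy as np

def loopy_bp(unary, pairwise, edges, n_iters=50):
    """First-order loopy sum-product on a pairwise MRF with binary nodes.
    SOLBP would track a variance for each message in addition to the mean."""
    msgs = {}
    for i, j in edges:
        msgs[(i, j)] = np.full(2, 0.5)
        msgs[(j, i)] = np.full(2, 0.5)
    for _ in range(n_iters):
        new = {}
        for (i, j) in msgs:
            # product of node i's unary potential and all incoming
            # messages except the one coming from j
            incoming = [msgs[(k, l)] for (k, l) in msgs if l == i and k != j]
            prod = unary[i] * np.prod(incoming or [np.ones(2)], axis=0)
            m = pairwise @ prod          # marginalise over node i's state
            new[(i, j)] = m / m.sum()    # normalise for numerical stability
        msgs = new
    # belief at each node: unary potential times all incoming messages
    beliefs = {}
    for i in {v for e in edges for v in e}:
        b = unary[i] * np.prod([msgs[(k, l)] for (k, l) in msgs if l == i],
                               axis=0)
        beliefs[i] = b / b.sum()
    return beliefs

# 3-node cycle of binary variables with an attractive pairwise potential
unary = {0: np.array([0.7, 0.3]), 1: np.array([0.5, 0.5]),
         2: np.array([0.4, 0.6])}
pairwise = np.array([[0.9, 0.1], [0.1, 0.9]])
beliefs = loopy_bp(unary, pairwise, edges=[(0, 1), (1, 2), (2, 0)])
```

On a cyclic graph like this, the messages are iterated to a fixed point rather than computed in a single sweep, which is exactly the regime SOLBP targets.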
Recent counter-adversarial system design problems have motivated the development of inverse Bayesian filters. For example, the inverse Kalman filter (I-KF) has recently been formulated to estimate an adversary's Kalman-filter-tracked estimates and, hence, predict the adversary's future steps. The purpose of this paper and the companion paper (Part I) is to address the inverse filtering problem in nonlinear systems by proposing an inverse extended Kalman filter (I-EKF). In the companion paper (Part I), we developed the theory of the I-EKF (with and without unknown inputs) and the I-KF (with unknown inputs). In this paper, we develop this theory for highly nonlinear models, which employ second-order, Gaussian-sum, and dithered forward EKFs. In particular, we derive theoretical stability guarantees for the second-order I-EKF using the bounded nonlinearity approach. To address the limitation of the standard I-EKF that the system model and forward filter are perfectly known to the defender, we propose a reproducing kernel Hilbert space based EKF, which learns the unknown system dynamics from its observations and can be employed as an inverse filter to infer the adversary's estimates. Numerical experiments demonstrate the state estimation performance of the proposed filters using the recursive Cramér-Rao lower bound as a benchmark.
This paper addresses two major challenges in terahertz (THz) channel estimation: the beam-split phenomenon, i.e., beam misalignment due to frequency-independent analog beamformers, and computational complexity due to the use of ultra-massive antenna arrays. Data-driven techniques are known to mitigate the complexity of this problem, but they usually require the transmission of datasets from the users to a central server, entailing a huge communication overhead. In this work, we employ federated learning (FL) for THz channel estimation, in which the users transmit only model parameters instead of the whole dataset, thereby improving communication efficiency. To accurately estimate the channel despite beam-split, we propose a beamspace support alignment technique that requires no additional hardware. Compared with previous works, our method provides higher channel estimation accuracy as well as approximately $68$ times lower communication overhead.
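The federated learning setup described above, where users upload model parameters rather than raw datasets, can be sketched with a server-side weighted-averaging step in the style of FedAvg. This is a generic sketch, not the paper's estimator; the function name, toy dimensions, and client sizes are hypothetical.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side federated averaging: aggregate the model parameters
    uploaded by the users, weighted by local dataset size. The raw data
    never leaves the devices, which is what cuts communication overhead."""
    total = sum(client_sizes)
    return [
        sum(w[i] * n / total for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# three hypothetical users, each holding local parameters for two layers
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
global_model = fedavg(clients, client_sizes=[100, 50, 50])
```

The communication cost per round is then proportional to the model size rather than the dataset size, which is the source of the overhead reduction the abstract quantifies.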
With the deployment of fifth-generation (5G) wireless systems gathering momentum worldwide, possible technologies for 6G are under active research discussion. In particular, the role of machine learning (ML) in 6G is expected to enhance and aid emerging applications such as virtual and augmented reality, vehicular autonomy, and computer vision. This will result in a large volume of wireless data traffic involving images, video, and speech. ML algorithms handle the classification/recognition/estimation of such data through learning models located on cloud servers, which requires wireless transmission of the data from edge devices to the cloud. Channel estimation, conventionally handled separately from the recognition step, is critical for accurate learning performance. To combine the learning of the channel and of the ML data, we introduce implicit channel learning, which performs the ML task without estimating the wireless channel. Here, the ML model is trained with channel-corrupted datasets in place of nominal data. Without any channel estimation, the proposed approach exhibits approximately 60% improvement in image and speech classification tasks over various scenarios such as millimeter-wave and IEEE 802.11p vehicular channels.
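The key idea above, training on channel-corrupted rather than nominal data, can be sketched as a data-augmentation step. This is a minimal sketch assuming a random flat-fading gain plus additive white Gaussian noise at a chosen SNR; the channels in the abstract (mmWave, IEEE 802.11p vehicular) are far richer, and all names and dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_corrupt(x, snr_db=20.0):
    """Pass a real-valued training sample through a random flat-fading
    channel with AWGN, producing the kind of channel-corrupted data an
    implicit-channel-learning model would be trained on."""
    h = rng.rayleigh(scale=1 / np.sqrt(2))           # random fading gain
    y = h * x
    sig_pow = np.mean(y ** 2)
    noise_pow = sig_pow / (10 ** (snr_db / 10))      # SNR -> noise power
    return y + rng.normal(0.0, np.sqrt(noise_pow), size=x.shape)

# toy dataset of flattened samples; the model trains on `corrupted`
clean = rng.normal(size=(8, 64))
corrupted = np.stack([channel_corrupt(s) for s in clean])
```

Training on `corrupted` instead of `clean` is what lets the model absorb the channel statistics implicitly, removing the separate channel estimation stage.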
Recent advances in counter-adversarial systems have garnered significant research interest in inverse filtering from a Bayesian perspective. For example, interest in estimating an adversary's Kalman-filter-tracked estimates, with the goal of predicting the adversary's future steps, has led to the recent formulation of the inverse Kalman filter (I-KF). In this context of inverse filtering, we address the key challenges of nonlinear process dynamics and unknown inputs to the forward filter by proposing an inverse extended Kalman filter (I-EKF). By considering nonlinearity in both the forward and inverse state-space models, we derive I-EKFs with and without unknown inputs; in the process, the I-KF with unknown inputs is also obtained. We then provide theoretical stability guarantees using the bounded nonlinearity and unknown matrix approaches. We further generalize these formulations to the cases of higher-order, Gaussian-sum, and dithered I-EKFs. Numerical experiments validate the various proposed inverse filters using the recursive Cramér-Rao lower bound as a benchmark.
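The forward EKF that these inverse filters build upon follows the standard predict/update recursion, with Jacobians linearizing the nonlinear transition and measurement functions. Below is a minimal sketch of one such cycle on a hypothetical scalar system; the paper's I-EKF treats the forward filter's estimate as its own observation, which is not shown here.

```python
import numpy as np

def ekf_step(x_est, P, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of a standard (forward) EKF."""
    # predict: propagate state and covariance through the linearised model
    x_pred = f(x_est)
    F = F_jac(x_est)
    P_pred = F @ P @ F.T + Q
    # update: correct with the measurement via the Kalman gain
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new

# toy scalar nonlinear system: x_{k+1} = 0.9 x_k + sin(x_k), z_k = x_k^2
f = lambda x: 0.9 * x + np.sin(x)
F_jac = lambda x: np.array([[0.9 + np.cos(x[0])]])
h = lambda x: x ** 2
H_jac = lambda x: np.array([[2 * x[0]]])
x, P = np.array([1.0]), np.eye(1)
x, P = ekf_step(x, P, np.array([1.1]), f, F_jac, h, H_jac,
                Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
```

The second-order, Gaussian-sum, and dithered variants discussed in the abstract replace this first-order linearization with richer approximations of the nonlinearities.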
The retrieval of a signal from the Fourier transform of its third-order statistics, or bispectrum, arises in a variety of signal processing problems. Conventional methods do not provide a unique inversion of the bispectrum. In this paper, we present an approach that uniquely recovers signals with finite spectral support (bandlimited signals) from at least $3B$ measurements of their bispectral function (BF), where $B$ is the signal's bandwidth. Our approach also extends to time-limited signals. We propose a two-step trust-region algorithm that minimizes a non-convex objective function. First, we approximate the signal by a spectral algorithm. Then, we refine the attained initialization based upon a sequence of gradient iterations. Numerical experiments suggest that our proposed algorithm is able to estimate band- and time-limited signals from their BFs, for both complete and undersampled observations.
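The bispectral function being inverted can be computed directly from the DFT. This sketch assumes the common discrete definition B(k1, k2) = X(k1) X(k2) X*(k1 + k2); it also demonstrates that the bispectrum is invariant to circular shifts of the signal, one illustration of why its inversion is nontrivial.

```python
import numpy as np

def bispectrum(x):
    """Discrete bispectrum of a signal: B(k1, k2) = X(k1) X(k2) X*(k1+k2),
    the Fourier transform of the third-order cumulant (minimal sketch)."""
    X = np.fft.fft(x)
    n = len(x)
    k = np.arange(n)
    # index (k1 + k2) mod n for every frequency pair
    return X[:, None] * X[None, :] * np.conj(X[(k[:, None] + k[None, :]) % n])

x = np.random.default_rng(1).normal(size=32)
B = bispectrum(x)
```

Because the linear phase terms of a circularly shifted signal cancel in the triple product, `bispectrum(np.roll(x, s))` equals `bispectrum(x)` exactly, so any inversion method can recover the signal at best up to such ambiguities.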
Intelligent reflecting surfaces (IRS) have recently received significant attention in wireless communications, as they reduce the hardware complexity, physical size, weight, and cost of conventional large arrays. However, deploying an IRS entails dealing with multiple channel links between the base station (BS) and the users. Moreover, the BS and IRS beamformers require a joint design, wherein the IRS elements must be rapidly reconfigured. Data-driven techniques, such as deep learning (DL), are critical in addressing these challenges. The lower computation time and model-free nature of DL make it robust against data imperfections and environmental changes. At the physical layer, DL has been shown to be effective for IRS signal detection, channel estimation, and active/passive beamforming, using architectures such as supervised, unsupervised, and reinforcement learning. This article provides an overview of these techniques for designing DL-based IRS-assisted wireless systems.
Hybrid analog and digital beamforming transceivers are instrumental in addressing the challenges of expensive hardware and high training overhead in next-generation millimeter-wave (mm-Wave) massive MIMO (multiple-input multiple-output) systems. However, the lack of fully digital beamforming in hybrid architectures and the short coherence time at mm-Wave impose additional constraints on channel estimation. Prior work on addressing these challenges has focused largely on narrowband channels, where optimization-based or greedy algorithms are employed to derive the hybrid beamformers. In this paper, we introduce a deep learning (DL) approach for channel estimation and hybrid beamforming in frequency-selective, wideband mm-Wave systems. In particular, we consider a massive MIMO orthogonal frequency-division multiplexing (MIMO-OFDM) system and propose three different DL frameworks comprising convolutional neural networks (CNNs), which accept the raw data of the received signal as input and yield the channel estimates and hybrid beamformers at the output. We also introduce both offline and online prediction schemes. Numerical experiments demonstrate that, compared to the current state-of-the-art optimization and DL methods, our approach provides higher spectral efficiency, smaller computational cost, and fewer pilot signals, as well as higher tolerance against deviations in the received pilot data, corrupted channel matrices, and changes in the propagation environment.
Machine Translation (MT) systems generally aim at the automatic representation of a source language in a target language, retaining the originality of the context, using various Natural Language Processing (NLP) techniques. Among the various NLP methods is Statistical Machine Translation (SMT), which uses probabilistic and statistical techniques to analyze information and perform the conversion. This paper canvasses the development of bilingual SMT models for translating English to fifteen low-resource Indian Languages (ILs) and vice versa. At the outset, all 15 languages are briefed with a short description related to our experimental need. Further, a detailed analysis of the Samanantar and OPUS datasets for model building, along with the standard benchmark dataset (Flores-200) for fine-tuning and testing, is done as a part of our experiment. Different preprocessing approaches are proposed in this paper to handle the noise of the dataset. To create the system, the MOSES open-source SMT toolkit is explored. Distance reordering is utilized with the aim of understanding the rules of grammar and context-dependent adjustments through a phrase reordering categorization framework. In our experiment, the quality of the translation is evaluated using standard metrics such as BLEU, METEOR, and RIBES.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
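The k-fold cross-validation that 37% of participants reported can be sketched as a simple disjoint index split over the training set. A minimal sketch; the function name and toy sizes are illustrative.

```python
import numpy as np

def k_fold_indices(n_samples, k=5, seed=0):
    """Split sample indices into k disjoint folds and yield one
    (train, validation) index pair per fold."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

splits = list(k_fold_indices(20, k=5))
```

Each sample appears in exactly one validation fold, so the k per-fold scores give an estimate of generalization performance without touching a held-out test set.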